Scaling hasn’t gotten us to AGI, or ‘superintelligence’, let alone AI we could trust. What do we do next?
Pointer to a new essay by the author, in The New York Times
Scaling hasn’t gotten us to AGI, or ‘superintelligence’, let alone AI we could trust.
The field is overdue for a rethink.
What do we do next?
Three ideas, drawn from the cognitive sciences, in a new essay by yours truly at The New York Times; gift link here: https://t.co/H9TAwHWplq
Keep going Gary. You're the boy telling the AI industry it's wearing no clothes.
I'm glad to see that your views are being more widely disseminated, but I'm afraid that much of the audience just won't get it.
You wrote, "Even significantly scaled, they still don’t fully understand the concepts they are exposed to — which is why they sometimes botch answers or generate ridiculously incorrect drawings."
I'm concerned that the average reader might take this to mean that AIs partially understand concepts. As I'm sure you know, computers don't "understand" anything -- at least not in the way the word is usually used.